Vehicle counting, classification and detection are becoming increasingly important in highway management. However, because vehicles vary widely in size, their detection remains a challenge that directly affects the accuracy of vehicle counts. To address this issue, we propose a vehicle counting, classification and detection framework. The proposed system uses YOLOv3 to detect, classify and count multiple vehicles in still images and CCTV footage.
I. INTRODUCTION
Over the past few years, traffic control has become a serious issue for society. Problems such as traffic congestion, lack of parking space and pollution affect people daily. Although the field has made major advances in recent years, the detection and classification of vehicles remains a demanding problem.
The scope in this area is broad because vehicles present a variety of challenging features, including edges, colors, shadows, corners and textures. Advances in hardware and falling manufacturing costs have increased the number of surveillance devices deployed in recent years, and these systems now use high-resolution video cameras. As a result, the many video sources generate a volume of information far too large for human operators to analyze. Researchers therefore increasingly turn to technologies such as Intelligent Transportation Systems.
An important task in surveillance systems is the detection of different vehicle types, and vehicle classification is a central stage in traffic management software. Prior knowledge of the vehicle model and type is required because it enables queries such as “in which direction did the vehicle pass, and at what time?”. Feature extraction and vehicle classification therefore cover a wide range of traffic management applications.
II. OBJECTIVE
The objective of the traffic surveillance system is to detect, classify and count vehicles; such systems can also be extended to more complex tasks such as driver activity recognition and lane recognition.
III. LITERATURE REVIEW
Murugan, V., and V. R. Vijaykumar, “Automatic Moving Vehicle Detection and Classification Based on Artificial Neural Fuzzy Inference System”, Wireless Personal Communications, 100(3), pp. 745-766, 2018.
Audebert, Nicolas, Bertrand Le Saux, and Sébastien Lefèvre, “Segment-before-detect: Vehicle detection and classification through semantic segmentation of aerial images” Remote Sensing, 9(4), p. 368, 2017.
Seenouvong, Nilakorn, Ukrit Watchareeruetai, Chaiwat Nuthong, Khamphong Khongsomboon, and Noboru Ohnishi, “Vehicle detection and classification system based on virtual detection zone”, In Proceedings of the International Joint Conference on Computer Science and Software Engineering (JCSSE), IEEE, pp. 1-5, 2016.
Dong, Zhen, Yuwei Wu, Mingtao Pei, and Yunde Jia, “Vehicle type classification using a semi-supervised convolutional neural network”, IEEE Transactions on Intelligent Transportation Systems, 16(4), pp. 2247-2256, 2015.
Banu, A. Shakin, and P. Vasuki, “Video based vehicle detection using morphological operation and hog feature extraction”, ARPN Journal of Engineering and Applied Sciences, 10(4), pp. 1866-1871, 2015.
Geiger, Andreas, Philip Lenz, and Raquel Urtasun, “Are we ready for autonomous driving? The KITTI Vision Benchmark Suite”, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, pp. 3354-3361, 2012.
IV. METHODS USED FOR DETECTION
In video processing, the initial stage is vehicle detection, i.e. localizing the vehicle in the image. Vehicle detection, together with motion estimation, tracking and behavior analysis, forms the basis for the further processing needed to achieve a high classification success rate. There are two approaches to vehicle detection: appearance based and motion based. Appearance-based approaches rely on parameters such as the texture, color and shape of a vehicle, whereas motion-based approaches use movement characteristics to separate vehicles from the static background scene.
A. Motion Based Features
Motion detection is a significant task in computer vision. In traffic scenes, only the moving vehicles are of interest, so motion detection separates the foreground objects that are in motion from the still background of the image. The motion cues used to distinguish moving traffic from the stationary background fall into three categories: temporal frame differencing, which considers the last two or three successive frames; background subtraction, which constructs a background model from the frame history; and optical flow, which uses the instantaneous pixel velocity on the image plane.
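As an illustration of the background-subtraction idea, the following minimal OpenCV sketch marks moving vehicles by maintaining a MOG2 background model; the video file name traffic.mp4 and the thresholds are illustrative assumptions, not values taken from this work.

```python
import cv2

# Background-subtraction sketch (assumes a local clip "traffic.mp4").
# MOG2 builds a background model from the frame history; moving vehicles
# appear as foreground blobs in the mask.
cap = cv2.VideoCapture("traffic.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=50, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)                                  # foreground mask
    fg_mask = cv2.medianBlur(fg_mask, 5)                               # suppress speckle noise
    _, fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:                                   # ignore small blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("moving vehicles", frame)
    if cv2.waitKey(30) & 0xFF == 27:                                   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```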
B. Frame Differencing
In the temporal frame-differencing technique, the pixel-wise difference between two consecutive frames is computed, and the moving foreground region is obtained by applying a threshold. The detection rate is improved by using three consecutive frames: the two inter-frame differences are binarized and combined with a bitwise AND operation to obtain the moving target region.
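The three-frame variant can be sketched as follows; the video path and the binarization threshold are illustrative assumptions rather than values used in this work.

```python
import cv2

# Three-frame differencing sketch: two inter-frame differences are thresholded
# and combined with a bitwise AND to isolate the moving region.
# Assumes a local video file "traffic.mp4" and a hand-picked threshold of 25.
cap = cv2.VideoCapture("traffic.mp4")

def grab_gray(capture):
    ok, frame = capture.read()
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) if ok else None

prev, curr = grab_gray(cap), grab_gray(cap)
while True:
    nxt = grab_gray(cap)
    if nxt is None:
        break
    d1 = cv2.absdiff(curr, prev)                      # difference of frames t-1 and t
    d2 = cv2.absdiff(nxt, curr)                       # difference of frames t and t+1
    _, b1 = cv2.threshold(d1, 25, 255, cv2.THRESH_BINARY)
    _, b2 = cv2.threshold(d2, 25, 255, cv2.THRESH_BINARY)
    motion = cv2.bitwise_and(b1, b2)                  # moving target region
    cv2.imshow("motion mask", motion)
    prev, curr = curr, nxt
    if cv2.waitKey(30) & 0xFF == 27:                  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```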
C. Appearance Based Features
The visual appearance of an object can be described and classified in terms of its color, texture and shape. Methods based on these features usually rely on prior data for modeling: feature extraction is used to relate the derived two-dimensional image features to the corresponding three-dimensional features of the real world. Unlike motion-based approaches, appearance-based approaches can also detect stationary objects.
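As one example of an appearance feature, the sketch below computes a HOG (histogram of oriented gradients) descriptor for a fixed-size vehicle patch, which could then feed a classifier such as an SVM; the 64x64 window, the block and cell sizes, and the file name vehicle_patch.jpg are illustrative assumptions.

```python
import cv2

# HOG appearance-feature sketch. Assumes a local image "vehicle_patch.jpg".
patch = cv2.imread("vehicle_patch.jpg", cv2.IMREAD_GRAYSCALE)
patch = cv2.resize(patch, (64, 64))

# Parameters: window size, block size, block stride, cell size, number of bins.
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)
features = hog.compute(patch)        # vector of gradient-orientation histograms
print(features.shape)
```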
D. Part Based Model
In this approach, objects are divided into several smaller parts that are modeled in part-based detection models. Exploiting the spatial relationships between these parts has proved to be a widespread method for vehicle detection. To improve the detection rate and handle occlusion, the vehicle in the image is divided into front, side and rear parts [14]. A trained deformable part model is then used for robust vehicle detection.
E. Neural Networks
Vehicle detection with neural networks involves five major stages: loading the data set, designing the convolutional neural network, configuring the training options, training the object detector using Faster R-CNN, and evaluating the trained detector.
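For orientation only, the following sketch runs inference with a pretrained Faster R-CNN from torchvision rather than a detector trained through the stages above; the image path highway.jpg and the COCO vehicle class ids are assumptions, not part of this work.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Inference-only Faster R-CNN sketch. A real pipeline would instead train the
# detector on its own vehicle data set, as described in the stages above.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("highway.jpg").convert("RGB")      # assumed local image
with torch.no_grad():
    outputs = model([to_tensor(image)])[0]            # dict of boxes, labels, scores

VEHICLE_IDS = {3: "car", 4: "motorcycle", 6: "bus", 8: "truck"}   # COCO label ids
for box, label, score in zip(outputs["boxes"], outputs["labels"], outputs["scores"]):
    if score > 0.5 and int(label) in VEHICLE_IDS:
        print(VEHICLE_IDS[int(label)], [round(float(v), 1) for v in box], float(score))
```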
V. PROCESS FLOW
Vehicle counting, classification, and detection are important tasks in traffic management and surveillance systems. The process typically involves the following steps:
Data Collection: This involves collecting video footage of the traffic from cameras or other sensors.
Pre-processing: The collected data may require pre-processing to remove any noise or irrelevant data that can affect the accuracy of the analysis.
Object Detection: The next step involves detecting the presence of vehicles in the video footage using computer vision techniques such as object detection algorithms.
Object Tracking: Once the vehicles are detected, the system tracks them over time so that each vehicle keeps a consistent identity across frames and is counted only once.
Vehicle Classification: The system then classifies the vehicles based on their type, such as cars, trucks, motorcycles, or buses, using machine learning algorithms.
Data Analysis: The final step involves analyzing the data collected during the entire process to provide insights into vehicle counts, vehicle types, and other relevant information.
OpenCV: OpenCV is an open-source computer vision library with Python bindings that supports image processing and computer vision tasks. It provides a wide range of features, including object detection, face recognition, and tracking.
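Since the proposed system uses YOLOv3, the detection and counting steps above can be sketched with OpenCV's DNN module as follows; the Darknet files (yolov3.cfg, yolov3.weights, coco.names) and the image name traffic.jpg are assumed to be available locally and are not provided by this paper.

```python
import cv2
import numpy as np

# YOLOv3 vehicle-detection sketch via OpenCV's DNN module.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
classes = open("coco.names").read().strip().split("\n")
vehicle_classes = {"car", "bus", "truck", "motorbike"}

image = cv2.imread("traffic.jpg")
h, w = image.shape[:2]
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
layer_outputs = net.forward(net.getUnconnectedOutLayersNames())

boxes, confidences, class_ids = [], [], []
for output in layer_outputs:
    for det in output:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        conf = float(scores[class_id])
        if conf > 0.5 and classes[class_id] in vehicle_classes:
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])   # relative -> pixel coords
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(conf)
            class_ids.append(class_id)

# Non-maximum suppression removes duplicate boxes; the survivors give the count.
keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
print("vehicles detected:", len(keep))
for i in np.array(keep).flatten():
    print(classes[class_ids[i]], boxes[i], round(confidences[i], 2))
```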
VI. VEHICLE DETECTION
Vehicle detection is a computer vision technology that involves detecting the presence and location of vehicles in images or video streams captured by cameras. The goal of vehicle detection is to identify vehicles in real time for various applications such as traffic management, surveillance, and self-driving cars.
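One common way to turn detections into counts, though not necessarily the method used in this work, is to count a tracked vehicle the first time its centroid crosses a virtual line. A minimal sketch follows, assuming track ids and centroid histories come from an upstream tracker; the line position and the sample data are illustrative.

```python
# Counting-by-line-crossing sketch. Track ids and centroid histories are
# assumed to come from an upstream tracker (e.g. a simple centroid tracker).
COUNT_LINE_Y = 400          # illustrative y-coordinate of the counting line

def count_crossings(tracks, counted_ids):
    """tracks: {track_id: [(cx, cy), ...]} centroid history per vehicle."""
    new_counts = 0
    for track_id, history in tracks.items():
        if track_id in counted_ids or len(history) < 2:
            continue
        (_, prev_y), (_, curr_y) = history[-2], history[-1]
        if prev_y < COUNT_LINE_Y <= curr_y:        # crossed the line this frame
            counted_ids.add(track_id)
            new_counts += 1
    return new_counts

# Tiny usage example with hand-made centroid histories.
tracks = {1: [(120, 390), (122, 405)], 2: [(300, 200), (305, 220)]}
counted = set()
print(count_crossings(tracks, counted))   # -> 1 (only track 1 crossed the line)
```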
VII. VEHICLE CLASSIFICATION
Vehicle classification is the process of categorizing vehicles into different types based on characteristics such as size, shape, weight, and purpose. The goal is to identify the type of vehicle present in the image or video stream, for example a car, truck, motorcycle or bus, which is useful for applications such as traffic analysis, toll collection, and security, as well as for further analysis and decision-making.
VIII. CONCLUSION
In this paper, a detailed overview of the literature on video-based traffic monitoring and classification systems using computer vision methods is presented. The purpose of this study is to support researchers in vehicle detection and classification and in locating available vehicle data sets. The most prevalent issues in this field are biased data sets and distinct vehicle types that share the same size and shape, which makes them more difficult to categorize.
REFERENCES
[1] Foucher, Philippe, Yazid Sebsadji, Jean-Philippe Tarel, Pierre Charbonnier, and Philippe Nicolle, “Detection and recognition of urban road markings using images”, In Proceedings of the International IEEE Conference on Intelligent Transportation Systems (ITSC), IEEE, pp. 1747-1752, 2011.
[2] Oliveira, Miguel, Vitor Santos, and Angel D. Sappa, “Multimodal inverse perspective mapping”, Information Fusion, 24, pp. 108-121, 2015.
[3] Kim, Jong Bae, and Hang Joon Kim, “Efficient region-based motion segmentation for a video monitoring system”, Pattern recognition letters, 24(1-3), pp. 113-128, 2003.
[4] Li, Xiaobo, Zhi-Qiang Liu, and Ka-Ming Leung, “Detection of vehicles from traffic scenes using fuzzy integrals”, Pattern Recognition, 35(4), pp. 967-980, 2002.
[5] Xia, Yingjie, Chunhui Wang, Xingmin Shi, and Luming Zhang, “Vehicles overtaking detection using RGB-D data”, Signal Processing, 112, pp. 98-109, 2015.
[6] Zhang, Wei, QM Jonathan Wu, and Haibing Yin, “Moving vehicles detection based on adaptive motion histogram”, Digital Signal Processing, 20(3), pp. 793-805, 2010.
[7] Ji, Wenyang, Lingjun Tang, Dedi Li, Wenming Yang, and Qingmin Liao, “Video-based construction vehicles detection and its application in intelligent monitoring system”, CAAI Transactions on Intelligence Technology, 1(2), pp. 162-172, 2016.
[8] Anandhalli, Mallikarjun, and Vishwanath P. Baligar, “Improvised approach using background subtraction for vehicle detection”, In Proceedings of the IEEE International Advance Computing Conference (IACC), IEEE, pp. 303-308, 2015.
[9] Gibson, James J, “On the analysis of change in the optic array”, Scandinavian Journal of Psychology, 18(1), pp. 161-163, 1977.
[10] Horn, B. K. P., and B. G. Schunck, “Determining Optical Flow”, Artificial Intelligence, Vol. 17, pp. 185-203, 1981.
[11] Weber, Markus, Max Welling, and Pietro Perona, “Unsupervised learning of models for recognition”, In European conference on computer vision, Springer, pp. 18-32, 2000.
[12] Handmann, Uwe, Thomas Kalinke, Christos Tzomakas, Martin Werner, and W. V. Seelen, “An image processing system for driver assistance”, Image and Vision Computing, 18(5), pp. 367-376, 2000.